Search Results
Search for: All records (Page 1 of 1)
Total resources: 5
Feature alignment is an approach to improving robustness to distribution shift that matches the distribution of feature activations between the training and test distributions. A particularly simple but effective approach to feature alignment aligns the batch normalization statistics between the two distributions in a trained neural network. This technique has received renewed interest lately because of its impressive performance on robustness benchmarks. However, when and why it works is not well understood. We investigate the approach in more detail and identify several limitations: it helps significantly only with a narrow set of distribution shifts, and in several settings it even degrades performance. We also explain why these limitations arise by pinpointing why the approach can be so effective in the first place. Our findings call into question the utility of this approach, and of Unsupervised Domain Adaptation more broadly, for improving robustness in practice.
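The batch-normalization variant of feature alignment described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; it only shows the core idea under simplifying assumptions (a single fully-connected BatchNorm layer, no running-statistics momentum): at test time, the normalization statistics are re-estimated on the test batch rather than taken from training, while the trained affine parameters `gamma` and `beta` are kept.

```python
import numpy as np

def align_batchnorm_stats(test_features, gamma, beta, eps=1e-5):
    """Normalize test-batch activations using statistics computed on the
    test batch itself (instead of the training-time running statistics),
    then apply the trained affine parameters gamma and beta.

    This matches the distribution of feature activations between training
    and test: after this step, each feature has (approximately) zero mean
    and unit variance on the test batch, as it did during training.
    """
    mu = test_features.mean(axis=0)       # per-feature mean on the test batch
    var = test_features.var(axis=0)       # per-feature variance on the test batch
    normalized = (test_features - mu) / np.sqrt(var + eps)
    return gamma * normalized + beta

# Example: a simulated distribution shift (a constant offset) is removed,
# because the test-batch mean absorbs the shift.
rng = np.random.default_rng(0)
shifted = rng.normal(0.0, 1.0, size=(256, 4)) + 5.0  # shifted test features
gamma, beta = np.ones(4), np.zeros(4)                # trained affine parameters
out = align_batchnorm_stats(shifted, gamma, beta)
```

This also hints at the limitation the abstract identifies: the method can only correct shifts that are visible in first- and second-order activation statistics, so more complex shifts are left unaddressed or can even be made worse.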
- Burns, Collin; Steinhardt, Jacob (IEEE Conference on Computer Vision and Pattern Recognition)
- Hendrycks, Dan; Burns, Collin; Basart, Steven; Critch, Andrew; Li, Jerry; Song, Dawn; Steinhardt, Jacob (International Conference on Learning Representations)
- Andoni, Alexandr; Burns, Collin; Li, Yi; Mahabadi, Sepideh; Woodruff, David (Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2020))
- Andoni, Alexandr; Burns, Collin; Li, Yi; Mahabadi, Sepideh; Woodruff, David P. (APPROX/RANDOM)